06. When Feature Importance is Inconsistent
There are many methods for interpreting machine learning models and for measuring feature importance. Many of these methods can be inconsistent, meaning that the features that matter most to the model are not always given the highest importance scores. We saw this in the prior coding exercise, where two equally important features formed the "AND" operator, yet one was given a feature importance of 0.33 because it was used to split the tree first, while the other was given a score of 0.67 because it was used for the second split.
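As a small illustration (a sketch using scikit-learn, which may differ from the library used in the exercise), we can fit a decision tree to the AND of two binary features and inspect its Gini-based importances:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Two interchangeable binary features; the label is their AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Gini-based importance splits the credit unevenly: the feature used
# for the second split receives roughly 0.67, the other roughly 0.33,
# even though the two features play symmetric roles.
print(tree.feature_importances_)
```

Which feature ends up with the higher score depends only on which one the tree happened to split on first, not on any real difference between them.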
This is the motivation for using a consistent feature attribution method, Shapley Additive Explanations (SHAP), which we'll see next.
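To preview why Shapley values behave consistently, here is a brute-force computation (an illustrative sketch, not the SHAP library) of exact Shapley values for the same AND model, averaging the model output over the four equally likely inputs when a feature is unknown:

```python
from itertools import permutations
import numpy as np

# The model is the AND of two binary features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def value(subset, x):
    """Expected model output when only the features in `subset` are known."""
    mask = np.ones(len(X), dtype=bool)
    for i in subset:
        mask &= X[:, i] == x[i]
    return (X[mask, 0] & X[mask, 1]).mean()

def shapley(x):
    """Average each feature's marginal contribution over all orderings."""
    phi = [0.0, 0.0]
    for order in permutations(range(2)):
        seen = []
        for i in order:
            before = value(tuple(seen), x)
            seen.append(i)
            phi[i] += (value(tuple(seen), x) - before) / 2
    return phi

# Both features receive identical credit: [0.375, 0.375]
print(shapley((1, 1)))
```

Unlike the split-order-dependent tree importances, the two symmetric features get exactly equal attribution.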
If you wish to explore the concept of consistent feature attribution further, the blog post Interpretable Machine Learning with XGBoost discusses some of the inconsistencies seen in common feature importance calculation methods.